Multi-core in the Source Engine

Written by Wil Harris

November 2, 2006 | 04:18

Tags: #benchmark #core-2-duo #core-2-quad #directx-10 #download #gabe-newell #half-life-2 #kentsfield #multi-core

Companies: #intel #valve

Why go multi-threaded / multi-core?

Having got a sense of the mechanics behind the Source engine's transition to multi-core, we were keen to find out what had really motivated the move. Yes, there's new hardware arriving in the form of Kentsfield, but other development houses are not moving as quickly as Valve when it comes to embracing the technology.

Gabe Newell has a fairly easy answer. "If we're right, other people are going to take a long time to get the multi-threaded versions of their engines out. If we're right with the approach we've taken - which is to iterate and build on top of Source - we can get there a couple of years ahead of where they could be." In other words, Source can be at the cutting edge of engine technology, which makes it very attractive both to consumers and licensees.

"It makes performance more of a software problem in this generation of gaming than a hardware problem. That's not really where we'd like it to be, but that's the reality. But with this investment we're making into the Source engine, we really think that there are games that we can build that other companies that don't make this investment won't be able to build."

Jay Stelly, a senior engineer at Valve, adds: "In the past couple of years, graphics have really outstripped the rest of the game. We can make people look pretty photo-real, but we can't make them behave like real people. This is what multi-threading is about achieving."

The future of multi-core

In the near future, multi-core is about moving from dual-core processors to quad-core processors. Valve is clearly excited about the new Intel chip. Gabe Newell gives us details on his system buying policy - "We've been holding off buying any new systems here at work. Everything we're buying now, anything that needs replacing in the future - we're replacing it with Kentsfield."

How about AMD's 4x4 platform, which pairs two dual-core FX processors for an effective quad core - and its native quad-core part coming out next year? AMD has made a lot of noise about how its native quad-core part is going to be so much faster than Kentsfield, which uses a dual-core, dual-die design. Is that noise really warranted? Tom tells us: "Depending on how you choose to thread things, you could bring out performance differences across different quad-core architectures. But we're really trying to avoid that by working out the correct levels of granularity." Chris Green adds: "We've tried to avoid worst case performance in places like that. To be honest, we're more concerned with main memory bandwidth than the type of core interconnect." So, whilst there could be theoretical differences in performance between native and non-native quad-core, don't expect to see massive differences in practice.
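
To illustrate the granularity point, here is a minimal sketch - our own, not Valve's code, and using modern C++ threads rather than the engine's job system - of coarse-grained splitting: each core works on one large, contiguous, thread-private slice of data, so the dominant cost is main memory bandwidth rather than the cross-core traffic that would expose differences between quad-core interconnects.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

// Each worker updates one private, contiguous slice of the data: no locks,
// no shared writes, and almost no cross-core communication.
static void update_range(std::vector<float>& data, std::size_t begin, std::size_t end)
{
    for (std::size_t i = begin; i < end; ++i)
        data[i] = data[i] * 0.99f + 0.01f;
}

int main()
{
    std::vector<float> particles(1u << 20, 1.0f);   // stand-in for per-entity data
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = particles.size() / cores;

    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        const std::size_t begin = c * chunk;
        const std::size_t end = (c + 1 == cores) ? particles.size() : begin + chunk;
        workers.emplace_back(update_range, std::ref(particles), begin, end);
    }
    for (std::thread& t : workers)
        t.join();

    std::printf("updated %zu floats across %u threads\n", particles.size(), cores);
    return 0;
}
```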

It's also about add-in cards - GPUs want to do more multi-purpose work, and physics accelerators want to take on some of that load. By spreading physics and other such applications across CPU cores, is Valve shying away from using dedicated physics acceleration cards? Jay tells us: "The stuff we're doing here is not multi-threading the physics beyond the coarse approach, really. However, once we look at larger architectural changes, doing something with dedicated physics cards is not out of the question. But right now, it's a case of the adoption rate of that being pretty low." In other words, if physics cards take off, we might see acceleration in Source v2 - but don't hold your breath.
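
As a rough illustration of what that coarse approach looks like - again our own sketch, not Valve's code - the entire physics tick is handed to another core as a single job, while the main thread gets on with the rest of the frame and the two meet at a sync point.

```cpp
#include <cstdio>
#include <thread>

// Placeholder for a full physics tick for the frame.
static void simulate_physics(float dt)
{
    std::printf("physics step: dt = %.3f\n", dt);
}

// Placeholder for the work the main thread does in parallel.
static void build_render_lists()
{
    std::printf("render lists built\n");
}

int main()
{
    const float dt = 1.0f / 60.0f;

    for (int frame = 0; frame < 3; ++frame) {
        // Hand the whole physics subsystem to another core...
        std::thread physics(simulate_physics, dt);

        // ...while the main thread does unrelated frame work.
        build_render_lists();

        // Coarse sync point: the frame waits for physics to finish.
        physics.join();
    }
    return 0;
}
```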

But the chance of physics cards taking off looks pretty minimal judging by Newell's views on hardware. He talks enthusiastically of the "Post-GPU world" - rather like the one envisaged by AMD and ATI with their Fusion project. In this world, a number of homogeneous CPU cores are each tasked with different jobs - including graphics rendering. This allows for more flexibility when it comes to splitting up workloads, and means that engine functions such as AI and physics can become a more integral part of the gaming experience because of the scalability such an architecture adds. "All of a sudden," raves Gabe, "if your AI isn't running fast enough, you can lower your graphics resolution. That's some awesome flexibility."
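
Purely as a hypothetical sketch of that trade-off - the budgets and scaling steps below are invented for illustration, not anything Valve described - a frame loop could compare measured AI time against a target and scale rendering resolution up or down to rebalance the cores.

```cpp
#include <cstdio>

struct FrameBudget {
    float ai_ms;          // measured AI time last frame
    float ai_target_ms;   // how much we are willing to spend on AI
};

// Shrink the render resolution when AI is over budget; claw it back when AI has headroom.
static float adjust_resolution_scale(float scale, const FrameBudget& b)
{
    if (b.ai_ms > b.ai_target_ms)
        scale -= 0.1f;                                   // AI starved: give rendering fewer pixels
    else if (b.ai_ms < 0.5f * b.ai_target_ms && scale < 1.0f)
        scale += 0.05f;                                  // AI has headroom: restore resolution
    if (scale < 0.5f) scale = 0.5f;
    if (scale > 1.0f) scale = 1.0f;
    return scale;
}

int main()
{
    float scale = 1.0f;
    const FrameBudget frames[] = { {6.0f, 4.0f}, {5.0f, 4.0f}, {1.5f, 4.0f} };

    for (const FrameBudget& b : frames) {
        scale = adjust_resolution_scale(scale, b);
        std::printf("AI %.1fms / target %.1fms -> resolution scale %.2f\n",
                    b.ai_ms, b.ai_target_ms, scale);
    }
    return 0;
}
```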